

Joint space-time wind field data extrapolation and uncertainty quantification using nonparametric Bayesian dictionary learning

Pasparakis, George D., Kougioumtzoglou, Ioannis A., Shields, Michael D.

arXiv.org Machine Learning

A methodology is developed, based on nonparametric Bayesian dictionary learning, for joint space-time wind field data extrapolation and estimation of related statistics by relying on limited/incomplete measurements. Specifically, utilizing sparse/incomplete measured data, a time-dependent optimization problem is formulated for determining the expansion coefficients of an associated low-dimensional representation of the stochastic wind field. Compared to an alternative, standard, compressive sampling (CS) treatment of the problem, the developed methodology exhibits the following advantages. First, the Bayesian formulation also enables quantification of the uncertainty in the estimates. Second, the requirement in standard CS-based applications for an a priori selection of the expansion basis is circumvented. Instead, this is done herein in an adaptive manner based on the acquired data. Overall, the methodology exhibits enhanced extrapolation accuracy, even in cases of high-dimensional data of arbitrary form and of relatively large extrapolation distances. Thus, it can be used, potentially, in a wide range of wind engineering applications where various constraints dictate the use of a limited number of sensors. The efficacy of the methodology is demonstrated by considering two case studies. The first relates to the extrapolation of simulated wind velocity records consistent with a prescribed joint wavenumber-frequency power spectral density in a three-dimensional domain (2D and time). The second pertains to the extrapolation of four-dimensional (3D and time) boundary layer wind tunnel experimental data that exhibit significant spatial variability and non-Gaussian characteristics.
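The expansion-coefficient step at the heart of such schemes can be sketched in a few lines. The paper learns its dictionary adaptively within a Bayesian formulation; in the illustrative stand-in below, a fixed cosine dictionary and a plain least-squares fit recover a field from sparse measurements (all sizes, names, and the dictionary choice are hypothetical simplifications, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 8
t = np.linspace(0.0, 1.0, n)
D = np.cos(np.pi * np.outer(t, np.arange(k)))        # n x k cosine dictionary
signal = D @ np.array([0.0, 1.0, 0.0, 0.5, 0, 0, 0, 0])  # "true" field

obs = rng.choice(n, size=40, replace=False)          # sparse/incomplete samples
coef, *_ = np.linalg.lstsq(D[obs], signal[obs], rcond=None)  # expansion coeffs
recon = D @ coef                                     # field everywhere
```

With noiseless data and a well-conditioned dictionary the fit is exact; the Bayesian treatment in the paper would additionally return a posterior over `coef`, and hence an uncertainty band on `recon`.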


Analysis and Forecasting of the Dynamics of a Floating Wind Turbine Using Dynamic Mode Decomposition

Palma, Giorgio, Bardazzi, Andrea, Lucarelli, Alessia, Pilloton, Chiara, Serani, Andrea, Lugni, Claudio, Diez, Matteo

arXiv.org Artificial Intelligence

This article presents a data-driven, equation-free modeling of the dynamics of a hexafloat floating offshore wind turbine based on the Dynamic Mode Decomposition (DMD). The DMD is used here to provide a modal analysis and extract knowledge from the dynamic system. A forecasting algorithm for the motions, accelerations, and forces acting on the floating system, as well as the height of the incoming waves, the wind speed, and the power extracted by the wind turbine, is developed using a methodological extension called Hankel-DMD, which includes time-delayed copies of the states in an augmented state vector. All the analyses are performed on experimental data collected from an operating prototype. The quality of the forecasts obtained by varying two main hyperparameters of the algorithm, namely the number of delayed copies and the length of the observation time, is assessed using three different error metrics, each analyzing complementary aspects of the prediction. A statistical analysis revealed the existence of optimal values for the algorithm hyperparameters. Results show the approach's capability for short-term estimates of the system's future state, which can be used for real-time prediction and control. Furthermore, a novel Stochastic Hankel-DMD formulation is introduced by considering the hyperparameters as stochastic variables. The stochastic version of the method not only enriches the prediction with its related uncertainty but is also found to improve the normalized root mean square error by up to 10% on a statistical basis compared to the deterministic counterpart.
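The Hankel-DMD forecasting step can be sketched with plain linear algebra. The minimal version below (my own construction, not the authors' code) delay-embeds a scalar signal, fits the exact-DMD operator on the Hankel snapshots, and rolls the reduced state forward:

```python
import numpy as np

def hankel_dmd_forecast(x, delays, n_forecast):
    """Forecast a 1D signal by exact DMD on its Hankel (time-delay) embedding."""
    # Each Hankel column stacks `delays` consecutive samples (time-delayed copies).
    H = np.column_stack([x[i:i + delays] for i in range(len(x) - delays + 1)])
    X, Y = H[:, :-1], H[:, 1:]                  # snapshot pairs shifted by one step
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))           # drop numerically zero modes
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A = U.T @ Y @ Vh.T @ np.diag(1.0 / s)       # reduced one-step operator
    z = U.T @ H[:, -1]                          # project the last delay state
    forecast = []
    for _ in range(n_forecast):
        z = A @ z                               # timestep in the reduced space
        forecast.append((U @ z)[-1])            # newest sample of the delay state
    return np.array(forecast)

t = np.linspace(0.0, 8.0 * np.pi, 400)
x = np.sin(t)
pred = hankel_dmd_forecast(x[:300], delays=50, n_forecast=50)
```

For this noiseless sinusoid the delay-embedded dynamics are exactly linear, so the rollout tracks the true continuation; the paper's stochastic variant would additionally sample the number of delays and the observation length to attach an uncertainty estimate to `pred`.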


Direction-Constrained Control for Efficient Physical Human-Robot Interaction under Hierarchical Tasks

Xu, Mengxin, Wan, Weiwei, Wang, Hesheng, Harada, Kensuke

arXiv.org Artificial Intelligence

This paper proposes a control method to address the physical Human-Robot Interaction (pHRI) challenge in the context of hierarchical tasks. A common approach to managing hierarchical tasks is Hierarchical Quadratic Programming (HQP), which, however, cannot be directly applied to human interaction due to its allowance of arbitrary velocity direction adjustments. To resolve this limitation, we introduce the concept of directional constraints and develop a direction-constrained optimization algorithm to handle the nonlinearities induced by these constraints. The algorithm solves two sub-problems, minimizing the error and minimizing the deviation angle, in parallel, and combines the results of the two sub-problems to produce a final optimal outcome. The mutual influence between these two sub-problems is analyzed to determine the best parameter for combination. Additionally, the velocity objective in our control framework is computed using a variable admittance controller. Traditional admittance control does not account for constraints. To address this issue, we propose a variable admittance control method to adjust control objectives dynamically. The method helps reduce the deviation between robot velocity and human intention at the constraint boundaries, thereby enhancing interaction efficiency. We evaluate the proposed method in scenarios where a human operator physically interacts with a 7-degree-of-freedom robotic arm. Compared to existing methods, our approach generates smoother robotic trajectories during interaction while avoiding interaction delays at the constraint boundaries. Recent advancements in physical Human-Robot Interaction (pHRI) have significantly improved robots' abilities to support individuals [1] [2]. For example, pHRI has shown promising results in tasks such as load transportation [3], collaborative drawing [4], surface polishing [5], assembly [6], rehabilitation [7], etc.
In pHRI, the robot can reduce both the physical and cognitive load on humans, while humans contribute valuable guidance based on their experience.
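The directional constraint itself can be illustrated independently of the HQP machinery. The toy function below (my own construction, not the paper's algorithm) caps the angle between a commanded velocity and the human-intended direction, rotating the command back onto the cone boundary while preserving its magnitude:

```python
import numpy as np

def limit_deviation(v, d, max_angle):
    """Rotate v toward d so the angle between them is at most max_angle."""
    u = d / np.linalg.norm(d)                 # unit human-intended direction
    along = (v @ u) * u                       # component of v along d
    perp = v - along                          # component of v orthogonal to d
    perp_norm = np.linalg.norm(perp)
    angle = np.arctan2(perp_norm, v @ u)      # deviation angle of v from d
    if angle <= max_angle or perp_norm < 1e-12:
        return v                              # already inside the cone
    # Clamp onto the cone boundary: same magnitude, deviation exactly max_angle.
    return np.linalg.norm(v) * (np.cos(max_angle) * u
                                + np.sin(max_angle) * perp / perp_norm)

v = np.array([0.0, 1.0])                      # commanded velocity
d = np.array([1.0, 0.0])                      # human-intended direction
v_lim = limit_deviation(v, d, max_angle=np.pi / 4)
```

Here a command at 90 degrees from the intended direction is pulled back to the 45-degree cone boundary without changing its speed, which is the behavior the directional constraint enforces at the boundary.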


A domain decomposition-based autoregressive deep learning model for unsteady and nonlinear partial differential equations

Nidhan, Sheel, Jiang, Haoliang, Ghule, Lalit, Umphrey, Clancy, Ranade, Rishikesh, Pathak, Jay

arXiv.org Artificial Intelligence

In this paper, we propose a domain-decomposition-based deep learning (DL) framework, named transient-CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs). The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers. Unlike existing state-of-the-art methods that operate on the entire computational domain, our CNN-based autoencoder computes a lower-dimensional basis for solution and condition fields represented on subdomains. Timestepping is performed entirely in the latent space, generating embeddings of the solution variables from the time history of embeddings of solution and condition variables. This approach not only reduces computational complexity but also enhances scalability, making it well-suited for large-scale simulations. Furthermore, to improve the stability of our rollouts, we employ a curriculum learning (CL) approach during the training of the autoregressive model. The domain-decomposition strategy enables scaling to out-of-distribution domain sizes while maintaining the accuracy of predictions -- a feature not easily integrated into popular DL-based approaches for physics simulations. We benchmark our model against two widely-used DL architectures, Fourier Neural Operator (FNO) and U-Net, and demonstrate that our framework outperforms them in terms of accuracy, extrapolation to unseen timesteps, and stability for a wide range of use cases.
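The "timestepping entirely in the latent space" idea can be sketched with a linear stand-in: below, a POD/SVD basis plays the role of the CNN autoencoder and a least-squares one-step map plays the role of the autoregressive network (both are deliberate simplifications of the paper's components, on a synthetic traveling wave):

```python
import numpy as np

nx, nt = 64, 120
x = np.linspace(0.0, 2.0 * np.pi, nx)
snaps = np.stack([np.sin(x - 0.1 * j) for j in range(nt)], axis=1)  # nx x nt

train = snaps[:, :100]
U, _, _ = np.linalg.svd(train, full_matrices=False)
E = U[:, :4]                               # "encoder": 4-mode linear basis
Z = E.T @ train                            # latent trajectories, 4 x 100
A, *_ = np.linalg.lstsq(Z[:, :-1].T, Z[:, 1:].T, rcond=None)  # one-step map

z = Z[:, -1]
rollout = []
for _ in range(20):                        # autoregressive rollout in latent space
    z = A.T @ z                            # advance the embedding
    rollout.append(E @ z)                  # decode back to the full field
rollout = np.stack(rollout, axis=1)        # nx x 20 predicted snapshots
```

Because the traveling sine lives in a two-dimensional subspace with exactly linear latent dynamics, this toy rollout is accurate; the paper's nonlinear autoencoder, subdomain decomposition, and curriculum learning are what make the same pattern work for genuinely nonlinear PDEs.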


Partial-differential-algebraic equations of nonlinear dynamics by Physics-Informed Neural-Network: (I) Operator splitting and framework assessment

Vu-Quoc, Loc, Humer, Alexander

arXiv.org Artificial Intelligence

Several forms for constructing novel physics-informed neural networks (PINN) for the solution of partial-differential-algebraic equations based on derivative operator splitting are proposed, using the nonlinear Kirchhoff rod as a prototype for demonstration. The open-source DeepXDE is likely the best-documented framework, with many examples. Yet, we encountered some pathological problems and proposed novel methods to resolve them. Among these novel methods are the PDE forms, which evolve from the lower-level form with fewer unknown dependent variables to higher-level forms with more dependent variables. Traditionally, the highest-level form, the balance-of-momenta form, is the starting point for (hand) deriving the lowest-level form through a tedious (and error-prone) process of successive substitutions. The next step in a finite element method is to discretize the lowest-level form upon forming a weak form and linearizing with appropriate interpolation functions, followed by implementation in a code and testing. The time-consuming tedium of all these steps could be bypassed by applying the proposed novel PINN directly to the highest-level form. We developed a script based on JAX. While our JAX script did not show the pathological problems of DDE-T (DDE with TensorFlow backend), it is slower than DDE-T. That DDE-T is itself more efficient in the higher-level form than in the lower-level form makes working directly with the higher-level form even more attractive, in addition to the advantages mentioned above. Since coming up with an appropriate learning-rate schedule for a good solution is more art than science, we systematically codified our experience in detail, running optimization through a normalization/standardization of the network-training process so that readers can reproduce our results.


Bayesian Structural Model Updating with Multimodal Variational Autoencoder

Itoi, Tatsuya, Amishiki, Kazuho, Lee, Sangwon, Yaoyama, Taro

arXiv.org Machine Learning

A novel framework for Bayesian structural model updating is presented in this study. The proposed method utilizes the surrogate unimodal encoders of a multimodal variational autoencoder (VAE). The method facilitates an approximation of the likelihood when dealing with a small number of observations. It is particularly suitable for high-dimensional correlated simultaneous observations applicable to various dynamic analysis models. The proposed approach was benchmarked using a numerical model of a single-story frame building with acceleration and dynamic strain measurements. Additionally, an example involving a Bayesian update of nonlinear model parameters for a three-degree-of-freedom lumped mass model demonstrates computational efficiency when compared to using the original VAE, while maintaining adequate accuracy for practical applications.


Nonlinear MPC for Full-Pose Manipulation of a Cable-Suspended Load using Multiple UAVs

Sun, Sihao, Franchi, Antonio

arXiv.org Artificial Intelligence

In this work, we propose a centralized control method based on nonlinear model predictive control to let multiple UAVs manipulate the full pose of an object via cables. To the best of the authors' knowledge, this is the first method that takes into account the full nonlinear model of the load-UAV system and ensures all the feasibility constraints concerning the UAV maximum and minimum thrusts, the collision avoidance between the UAVs, cables, and load, and the tautness and maximum tension of the cables. By taking into account the above factors, the proposed control algorithm can fully exploit the performance of the UAVs and facilitate fast operation. Simulations are conducted to validate the algorithm to achieve fast and safe manipulation of the pose of a rigid-body payload using multiple UAVs. Compared with solutions relying on dedicated mechanical design, using multiple UAVs to transport and manipulate a cable-suspended load is a significantly cheaper and more promising solution. Most pieces of research regard the load as a point mass [5]-[10], with several exceptions using a bar-shape load [11]-[13].


Data-Driven Machine Learning Models for a Multi-Objective Flapping Fin Unmanned Underwater Vehicle Control System

Lee, Julian, Viswanath, Kamal, Geder, Jason, Sharma, Alisha, Pruessner, Marius, Zhou, Brian

arXiv.org Artificial Intelligence

Flapping-fin unmanned underwater vehicle (UUV) propulsion systems provide high maneuverability for naval tasks such as surveillance and terrain exploration. Recent work has explored the use of time-series neural network surrogate models to predict thrust from vehicle design and fin kinematics. We develop a search-based inverse model that leverages a kinematics-to-thrust neural network model for control system design. Our inverse model finds a set of fin kinematics with the multi-objective goal of reaching a target thrust and creating a smooth kinematic transition between flapping cycles. We demonstrate how a control system integrating this inverse model can make online, cycle-to-cycle adjustments to prioritize different system objectives.
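The cycle-to-cycle search can be sketched as a weighted two-objective minimization. Everything in the snippet is hypothetical: a toy analytic surrogate stands in for the kinematics-to-thrust neural network, and the candidate grid, bounds, and weight are invented for illustration:

```python
import numpy as np

def surrogate_thrust(freq, amp):
    """Hypothetical stand-in for the kinematics-to-thrust network."""
    return 0.8 * freq * amp ** 2

def inverse_search(target_thrust, prev, freqs, amps, w_smooth=0.1):
    """Pick fin kinematics trading thrust error against a smooth transition."""
    best, best_cost = None, np.inf
    for f in freqs:
        for a in amps:
            err = (surrogate_thrust(f, a) - target_thrust) ** 2   # objective 1
            smooth = (f - prev[0]) ** 2 + (a - prev[1]) ** 2      # objective 2
            cost = err + w_smooth * smooth     # weighted multi-objective cost
            if cost < best_cost:
                best, best_cost = (f, a), cost
    return best

freqs = np.linspace(0.5, 3.0, 26)              # candidate flapping frequencies
amps = np.linspace(0.2, 1.5, 27)               # candidate flapping amplitudes
kin = inverse_search(target_thrust=1.0, prev=(1.0, 0.8), freqs=freqs, amps=amps)
```

Re-running the search each flapping cycle with the previous cycle's kinematics as `prev` gives the online, cycle-to-cycle adjustment described above; retuning `w_smooth` shifts priority between thrust tracking and smoothness.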


Learning via Long Short-Term Memory (LSTM) network for predicting strains in Railway Bridge members under train induced vibration

Dutta, Amartya, Nath, Kamaljyoti

arXiv.org Artificial Intelligence

Bridge health monitoring using machine learning tools has become an efficient and cost-effective approach in recent times. In the present study, strains in railway bridge members, available from a previous study conducted by IIT Guwahati, have been utilized. These strain data were collected from an existing bridge while trains were passing over it. An LSTM network is trained to predict strains in different members of the railway bridge. Actual field data have been used to predict strains in different members from the strain data of a single member, and the predictions agree well with the ground truth values. This is in spite of the considerable noise in the data, showing the efficacy of LSTM in training and predicting even from noisy field data. This opens up the possibility of instrumenting the bridge with a much smaller number of sensors and predicting the strain data in the other members through the LSTM network.
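The member-to-member framing can be made concrete with a small synthetic example: sliding windows of strain from one instrumented member become the inputs, and the strain of another member is the target. The study uses an LSTM on field data; here a plain least-squares read-out on synthetic signals, a deliberate simplification, shows the mapping is learnable:

```python
import numpy as np

def make_windows(src, tgt, width):
    """Pair sliding windows of the measured member with the target member."""
    X = np.stack([src[i:i + width] for i in range(len(src) - width)])
    y = tgt[width:len(src)]
    return X, y

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 500)
member_a = np.sin(2.0 * t) + 0.05 * rng.standard_normal(500)  # instrumented, noisy
member_b = 0.6 * np.sin(2.0 * t)                              # member to infer
X, y = make_windows(member_a, member_b, width=20)

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear stand-in for the LSTM
mse = np.mean((X @ w - y) ** 2)             # in-sample prediction error
```

An LSTM replaces the linear read-out when the inter-member relationship is nonlinear or history-dependent, but the data preparation, windowing one member's noisy record to predict another's, is the same.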


Predicting Nonlinear Seismic Response of Structural Braces Using Machine Learning

Bas, Elif Ecem, Aslangil, Denis, Moustafa, Mohamed A.

arXiv.org Machine Learning

Numerical modeling of different structural materials that have highly nonlinear behaviors has always been a challenging problem in engineering disciplines. Experimental data is commonly used to characterize this behavior. This study aims to improve the modeling capabilities by using state-of-the-art machine learning techniques, and attempts to answer several scientific questions: (i) Which ML algorithm is capable of, and most efficient at, learning such a complex and nonlinear problem? (ii) Is it possible to artificially reproduce structural brace seismic behavior that can represent real physics? (iii) How can our findings be extended to different engineering problems that are driven by similar nonlinear dynamics? To answer these questions, the presented methods are validated using experimental brace data. The paper shows that, after proper data preparation, the long short-term memory (LSTM) method is highly capable of capturing the nonlinear behavior of braces. Additionally, the effects of tuning the hyperparameters of the models, such as the number of layers, the number of neurons, and the activation functions, are presented. Finally, the ability to learn nonlinear dynamics using deep neural network algorithms and their advantages are briefly discussed.